Conversation

@sjd210 sjd210 commented Oct 21, 2025

This is preparatory work for allowing the marks awarded for LLMFreeTextQuestions (and potentially others in the future, such as Parsons questions and those using the Python Code Editor) to be displayed outside the question page itself, for example in the markbook. This PR by itself should have no visible effect on either site.

For most questions, marks are calculated as 1 if the answer is correct and 0 if it is incorrect. For LLMFreeTextQuestions, marks are derived from the marksAwarded field of the question response. There is currently no straightforward, meaningful way to do this for question types like Parsons (which would need reference to the answer scheme) or Inline (which uses multiple responses).
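As a rough illustration of that rule (a minimal sketch using a simplified stand-in type, not the real isaac-api DTOs):

```java
// Minimal sketch of the marking rule above; AttemptResult is a hypothetical,
// flattened stand-in for a validated question attempt, not the real DTO.
public final class MarksSketch {

    record AttemptResult(boolean correct, Integer marksAwarded, boolean llmFreeText) {}

    /**
     * Most question types collapse to 1 mark (correct) or 0 (incorrect);
     * LLM free-text questions take their marks from the marksAwarded field.
     */
    static int calculateMarks(final AttemptResult attempt) {
        if (attempt.llmFreeText()) {
            return attempt.marksAwarded() != null ? attempt.marksAwarded() : 0;
        }
        return attempt.correct() ? 1 : 0;
    }
}
```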

I've tested backwards compatibility and all seems fine since we are not touching the question_attempt JSON object itself. The plan is to eventually phase out use of the correct column entirely, but for now both exist simultaneously.


Edit: This also sets the correctness criterion for LLMFreeTextQuestions to full marks rather than > 0 marks. For now, this may lead to a temporary discrepancy for users on the old API, but nothing breaking. For the future, we should consider how to deal with this more broadly: should we also be adding a maxMarks column to the database, or are we okay extracting it from the question part whenever relevant, since it should be a static value?
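For illustration, the criterion change amounts to roughly the following (a sketch with assumed names; maxMarks is taken from the question part rather than stored anywhere):

```java
// Hypothetical before/after for the LLM free-text correctness criterion.
final class CorrectnessCriterionSketch {
    // Old behaviour: any marks at all counted as correct.
    static boolean isCorrectOld(final int marksAwarded) {
        return marksAwarded > 0;
    }

    // New behaviour: full marks required; maxMarks is assumed to be read from
    // the question part, since it should be a static value.
    static boolean isCorrectNew(final int marksAwarded, final int maxMarks) {
        return marksAwarded == maxMarks;
    }
}
```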

Also, a note that the migration script copies the splitting dates from this older script, but I haven't updated the dates beyond what it includes yet.

Since an LLM-marked question may have multiple marks per question rather than a binary correct/incorrect result, the new marks field stores the number of marks awarded for each attempt.
We previously allowed extracting all attempts for a page, but not for an individual question part. This adds that functionality.
In the context of the markbook/assignment progress, LLMFreeTextQuestionValidationResponses have a marksAwarded field that can only be read by extracting the full question attempt, but we want to keep these attempts lightweight for all other question parts to avoid processing unnecessary data.

This change checks each question part and extracts the full response only for LLMFreeText ones.
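A minimal sketch of that check, with assumed type names, column names and question type discriminator (the real change touches PgQuestionAttempts and the validation response DOS/DTO classes):

```java
import java.io.IOException;
import java.sql.ResultSet;
import java.sql.SQLException;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

// Illustrative only: deserialise the full question_attempt JSON just for LLM
// free-text parts, and keep every other part lightweight.
class AttemptExtractionSketch {
    private final ObjectMapper mapper = new ObjectMapper();

    /** Deliberately simplified stand-in for LightweightQuestionValidationResponse. */
    record LightweightAttempt(boolean correct, Integer marks, JsonNode fullResponse) {}

    LightweightAttempt extract(final String questionType, final ResultSet rs)
            throws SQLException, IOException {
        boolean correct = rs.getBoolean("correct");
        Integer marks = (Integer) rs.getObject("marks");

        if ("isaacLLMFreeTextQuestion".equals(questionType)) {
            // Only LLM free-text parts pay the cost of parsing the full attempt,
            // so that marksAwarded is available to the markbook.
            JsonNode full = mapper.readTree(rs.getString("question_attempt"));
            return new LightweightAttempt(correct, marks, full);
        }
        // All other parts: no answer payload is parsed.
        return new LightweightAttempt(correct, marks, null);
    }
}
```

The point of the gating is that the heavier JSON parsing only happens for the question type that actually needs it, so the common path stays cheap.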
codecov bot commented Oct 21, 2025

Codecov Report

❌ Patch coverage is 17.77778% with 37 lines in your changes missing coverage. Please review.
✅ Project coverage is 37.22%. Comparing base (e7c2116) to head (accaa39).
⚠️ Report is 80 commits behind head on main.

Files with missing lines                               | Patch %  | Lines
...l/dtg/isaac/dto/QuestionValidationResponseDTO.java  |  7.69%   | 11 Missing and 1 partial ⚠️
...k/ac/cam/cl/dtg/isaac/quiz/PgQuestionAttempts.java  | 27.27%   | 7 Missing and 1 partial ⚠️
...aac/dos/LightweightQuestionValidationResponse.java  | 30.00%   | 6 Missing and 1 partial ⚠️
...c/dao/PgQuizQuestionAttemptPersistenceManager.java  |  0.00%   | 5 Missing ⚠️
...m/cl/dtg/isaac/dos/QuestionValidationResponse.java  |  0.00%   | 5 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #731      +/-   ##
==========================================
- Coverage   37.29%   37.22%   -0.08%     
==========================================
  Files         536      536              
  Lines       23709    23763      +54     
  Branches     2861     2866       +5     
==========================================
+ Hits         8843     8845       +2     
- Misses      13984    14033      +49     
- Partials      882      885       +3     

☔ View full report in Codecov by Sentry.

@sjd210 sjd210 changed the title from "Propogate the marks awarded for an LLMFreeTextQuestion to gameboards" to "Calculate new 'marks' field for question attempts in the database" on Oct 23, 2025
@sjd210 sjd210 marked this pull request as ready for review October 23, 2025 13:47
@jsharkey13 jsharkey13 left a comment

Some of these are such minor things; if you don't care about grouping marks and correct in the string and database representations, feel free to dismiss all of those comments!

This will not happen for the database itself in the first instance, since columns can only be added to the end; however, whenever it is flushed and rebuilt, the create script will enforce the desired ordering.
We will be moving away from using the field anyway, so it's probably best to keep it consistent now and update the calculation more universally.
sjd210 commented Nov 7, 2025

Removing marksAwarded entirely (the predecessor to marks for only LLM-Marked responses) means changing the DTOs in react-app now too: isaacphysics/isaac-react-app#1822

In the subsequent week's release, we will use another database migration script to update the columns.